California AG sends cease and desist to xAI over Grok's explicit deepfakes

Engadget

Rob Bonta's office is also investigating xAI over nonconsensual nude images and sexualized images of children. If you'll recall, xAI and Grok have been under fire for taking images of real individuals and putting them in revealing clothing like bikinis upon random users' requests. Bonta's office demands that xAI immediately cease and desist from creating "digitized sexually explicit material" when the depicted individual didn't consent to it or when the individual is a minor. It also demanded that xAI stop "facilitating or aiding and abetting the creation or publication of digitized sexually explicit material" of nonconsenting individuals and persons under 18 years of age. X changed its policies after the issue broke out and prevented the Grok account from editing images of real people into revealing clothing.


Grok Is Pushing AI 'Undressing' Mainstream

WIRED

During a two-hour period on December 31, the analyst gathered more than 15,000 URLs of images created by Grok and screen-recorded the chatbot's "media" tab on X, where generated images, both sexualized and non-sexualized, are posted. WIRED reviewed more than a third of the URLs that the researcher gathered and found that over 2,500 were no longer available, and nearly 500 were marked as "age-restricted adult content," requiring a login to view. Many of the remaining posts still featured scantily clad women.


Congress Passed a Sweeping Free-Speech Crackdown--and No One's Talking About It

Slate

Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily. Had you scanned any of the latest headlines around the TAKE IT DOWN Act, legislation that President Donald Trump signed into law Monday, you would have come away with a deeply mistaken impression of the bill and its true purpose. The surface-level pitch is that this is a necessary law for addressing nonconsensual intimate images, known more widely as revenge porn. Obfuscating its intent with a classic congressional acronym (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks), the TAKE IT DOWN Act purports to help scrub the internet of exploitative, nonconsensual sexual media, whether real or digitally mocked up, at a time when artificial intelligence tools and automated image generators have supercharged its spread. Enforcement is delegated to the Federal Trade Commission, which will give online platforms that specialize primarily in user-generated content (e.g., social media, message boards) a notice and a 48-hour takedown deadline whenever a qualifying image is reported.


Google makes it easier to remove explicit deepfakes from its search results

Engadget

Google has rolled out updates for Search with the intention of making explicit deepfakes as hard to find as possible. As part of its long-standing and ongoing fight against realistic-looking manipulated images, the company is making it easier for people to get nonconsensual fake imagery that features them removed from Search. It has long been possible for users to request the removal of those kinds of images under Google's policies. Now, whenever it grants someone's removal request, Google will also filter all explicit results on similar searches about them. The company's systems will scan for any duplicates of the offending image and remove them as well.


Dear Taylor Swift, we're sorry about those explicit deepfakes

MIT Technology Review

I can only imagine how you must be feeling after sexually explicit deepfake videos of you went viral on X. Disgusted. I'm really sorry this is happening to you. Nobody deserves to have their image exploited like that. But if you aren't already, I'm asking you to be furious. Furious that this is happening to you and so many other women and marginalized people around the world. Furious that our current laws are woefully inept at protecting us from violations like this.


X blocks Taylor Swift searches: What to know about the viral AI deepfakes

Al Jazeera

Social media platform X has blocked searches for one of the world's most popular personalities, Taylor Swift, after explicit artificial intelligence images of the singer-songwriter went viral. The deepfakes flooded several social media sites, from Reddit to Facebook. This has renewed calls to strengthen legislation around AI, particularly when it is misused for sexual harassment. Here's what you need to know about the Swift episode and the legality of deepfakes. On Wednesday, AI-generated, sexually explicit images began circulating on social media sites, particularly gaining traction on X.


Twitch takes a harder stance against explicit deepfakes

Engadget

Twitch already forbids explicit deepfake images and videos, but it's taking a tougher position against them today. The livestreaming service is updating its policy on adult nudity to include a ban on synthetic NCEI (non-consensual exploitative imagery), even if it's only shown briefly or to criticize its existence. It's also revising its sexual violence and exploitation policies to make clear that intentionally making and sharing non-consensual deepfakes can lead to a ban on the first offense. The policy changes should take effect within the next month. The company hopes the added clarity and modernized language will deter potential offenders.